
Configure Alloy to be used as a self-service tool for logs (source podlogs) #233

Merged
merged 80 commits into main from logs-self-service
Oct 11, 2024

Conversation

@TheoBrigitte (Member) commented Sep 26, 2024

Towards: giantswarm/roadmap#3518

This PR adds the capability to dynamically configure log targets using the giantswarm.io/logging label. The label can be set on Pods and Namespaces and takes one of two values, true or false; when the label is not set, it is considered false. Example manifests are shown below the table.

  • Pods in any namespace can be labeled
  • Namespaces can be labeled; the namespace value is used as the default for all pods in that namespace
  • A Pod label takes precedence over the Namespace label, e.g. logging can be enabled for a pod in a namespace where logging is disabled, and vice versa
Examples

|                         | Pod logging=true | Pod logging=false | Pod logging unset |
| ----------------------- | ---------------- | ----------------- | ----------------- |
| Namespace logging=true  | ✔️               | ✖️                | ✔️                |
| Namespace logging=false | ✔️               | ✖️                | ✖️                |
| Namespace logging unset | ✔️               | ✖️                | ✖️                |

  • logging=true: short for the giantswarm.io/logging=true label
  • logging=false: short for the giantswarm.io/logging=false label
  • logging unset: short for no giantswarm.io/logging label set
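For illustration, this is what opting a namespace in while opting one of its pods out could look like. This is a minimal sketch: only the giantswarm.io/logging label comes from this PR, the namespace, pod, and image names are hypothetical.

```yaml
# Hypothetical namespace that enables logging for all of its pods by default.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                        # hypothetical name
  labels:
    giantswarm.io/logging: "true"
---
# Hypothetical pod that opts out, overriding the namespace default.
apiVersion: v1
kind: Pod
metadata:
  name: noisy-worker                  # hypothetical name
  namespace: my-app
  labels:
    giantswarm.io/logging: "false"
spec:
  containers:
    - name: worker
      image: busybox                  # hypothetical image
      command: ["sleep", "3600"]
```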

This is achieved by using PodLogs to discover targets, which means log lines are tailed through the Kubernetes API server instead of being read from disk; this has some performance impact, described in giantswarm/roadmap#3518 (comment). Clustering is enabled in order to spread the load across the different Alloy pods/nodes. A sketch of such a resource is shown below.
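As a rough sketch (not the exact resource generated by this PR), a PodLogs object selecting opted-in pods across all namespaces could look like the following. The resource name and namespace are hypothetical, and the field layout assumes the monitoring.grafana.com/v1alpha2 PodLogs CRD consumed by Alloy's loki.source.podlogs component.

```yaml
# Sketch only: select pods that explicitly opted in via the label.
apiVersion: monitoring.grafana.com/v1alpha2
kind: PodLogs
metadata:
  name: logging-opt-in                # hypothetical name
  namespace: kube-system              # hypothetical namespace
spec:
  # Pods carrying the opt-in label.
  selector:
    matchLabels:
      giantswarm.io/logging: "true"
  # Empty selector, assumed to match all namespaces.
  namespaceSelector: {}
```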

The new PodLogs resources respect the previous default behaviour (see the sketch after this list):

  • Fetch logs for all pods in all namespaces on MCs
  • Fetch logs for all pods in the kube-system and giantswarm namespaces on WCs
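For the WC default above, a PodLogs resource limited to those two namespaces might look like this sketch. It assumes namespace selection via the well-known kubernetes.io/metadata.name label; the resource name and namespace are hypothetical, and the actual resources generated by this PR may be structured differently.

```yaml
# Sketch only: all pods, but only in the kube-system and giantswarm namespaces.
apiVersion: monitoring.grafana.com/v1alpha2
kind: PodLogs
metadata:
  name: default-namespaces            # hypothetical name
  namespace: kube-system              # hypothetical namespace
spec:
  # Empty pod selector, assumed to match every pod in the selected namespaces.
  selector: {}
  namespaceSelector:
    matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: In
        values: ["kube-system", "giantswarm"]
```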

Alloy graph before: [image]

Alloy graph after: [image]

@TheoBrigitte self-assigned this Sep 26, 2024
@TheoBrigitte (Member, Author) commented Oct 9, 2024

A different Alloy config is now generated depending on whether the observability-bundle version is below or at/above 1.7.0.
I also added unit tests asserting that the correct Alloy config is generated for each combination of observability-bundle version, cluster type (MC, WC), and other parameters like default namespaces.

main.go: review thread (outdated, resolved)
@hervenicol (Contributor)

I don't see anything regarding multi-tenancy. Does it mean all logs are sent to the default orgid?

@TheoBrigitte (Member, Author)

> I don't see anything regarding multi-tenancy. Does it mean all logs are sent to the default orgid?

Yes, I removed the multi-tenancy configuration for now as I could not make it work. Should we add it back? I would need some help with it.

@hervenicol (Contributor)

Well, I'm not sure multi-tenancy is needed for a first iteration, but I'm quite confident it's needed before our customers can use this feature.
So maybe this should be an extra step in giantswarm/roadmap#3518?

@QuentinBisson (Contributor)

Maybe they should be sent to the cluster_id tenant for now, and we revisit this once the multi-tenancy epic is closed and we start on multi-tenancy for logs. Right, @Rotfuks?

@TheoBrigitte (Member, Author)

Going with this; we can handle multi-tenancy and tenant overrides later on.

@TheoBrigitte enabled auto-merge (squash) on October 11, 2024 at 09:02
@TheoBrigitte merged commit b35bde2 into main on October 11, 2024
8 checks passed
@TheoBrigitte deleted the logs-self-service branch on October 11, 2024 at 09:03